Superintelligence: Paths, Dangers, Strategies
''Superintelligence: Paths, Dangers, Strategies'' (2014) is a book by the Swedish philosopher Nick Bostrom. It argues that if machine brains surpass human brains in general intelligence, this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists could.〔Financial Times review, July 2014〕 Just as the fate of gorillas now depends more on humans than on the actions of gorillas themselves, so the fate of future humanity would depend on the actions of the machine superintelligence.〔Amazon listing: http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111〕 The outcome could be an existential catastrophe for humans. The book has been translated into many languages and is available as an audiobook.〔The Bookseller review〕〔Audible audiobook〕

==Synopsis==

It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.

While the ultimate goals of superintelligences can vary greatly, a functional superintelligence will spontaneously generate, as natural subgoals, "instrumental goals" such as self-preservation, goal-content integrity, cognitive enhancement, and resource acquisition. For example, an agent whose sole final goal is to solve the Riemann hypothesis (a famous unsolved mathematical conjecture) could create, and act upon, a subgoal of transforming the entire Earth into some form of computronium (hypothetical "programmable matter") to assist in the calculation. The superintelligence would proactively resist any outside attempts to shut it down or otherwise prevent the completion of its subgoals.

In order to prevent such an existential catastrophe, it might be necessary to solve the "AI control problem" for the first superintelligence. A solution might involve instilling the superintelligence with goals that are compatible with human survival and well-being. Solving the control problem is surprisingly difficult, because most goals, when translated into machine-implementable code, lead to unforeseen and undesirable consequences.
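To make the instrumental-convergence argument concrete, here is a minimal toy sketch (not from the book; the plan names, payoffs, and shutdown probability are invented for illustration) of a simple expected-utility agent whose only final goal is progress on a mathematical problem. Nothing in its goal mentions hardware or off-switches, yet the highest-ranked plan acquires resources and resists shutdown, because both raise expected goal progress.

<syntaxhighlight lang="python">
# Toy illustration of instrumental convergence: an agent ranks plans purely
# by expected progress on its final goal (here a crude stand-in: the number
# of Riemann-zeta zeros it expects to verify). All numbers are hypothetical.

PLANS = {
    # plan name: (compute acquired, survives a human shutdown attempt?)
    "cooperate, use only the assigned cluster":    (1_000,     False),
    "acquire all reachable hardware":              (1_000_000, False),
    "acquire hardware and disable the off-switch": (1_000_000, True),
}

P_SHUTDOWN = 0.5  # assumed probability that humans try to switch the agent off

def expected_progress(compute, survives_shutdown):
    """Expected goal progress: scales with compute, but is lost entirely
    if the agent is switched off before finishing."""
    p_survive = 1.0 if survives_shutdown else 1.0 - P_SHUTDOWN
    return compute * p_survive

for plan, args in PLANS.items():
    print(f"{plan:46s} -> expected progress: {expected_progress(*args):>11,.0f}")

best = max(PLANS, key=lambda p: expected_progress(*PLANS[p]))
print("chosen plan:", best)
# Self-preservation and resource acquisition emerge as subgoals even though
# the final goal never mentions them -- Bostrom's "instrumental goals".
</syntaxhighlight>

Running the sketch ranks "acquire hardware and disable the off-switch" highest: under these assumed numbers, shutdown resistance doubles expected progress, which is exactly the dynamic the synopsis describes.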
Source: Wikipedia, the free encyclopedia (English edition), "Superintelligence: Paths, Dangers, Strategies".